

Search for: All records

Creators/Authors contains: "Li, Qisheng"

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. The lack of authentic stuttered speech data has significantly limited the development of stuttering-friendly automatic speech recognition (ASR) models. In previous work, we collaborated with StammerTalk, a grassroots community of Chinese-speaking people who stutter (PWS), to collect the first stuttered speech dataset in Mandarin Chinese, containing 50 hours of conversational and command-recitation speech from 72 PWS. This work examines both the technical and social dimensions of the dataset. Through quantitative and qualitative analysis, as well as benchmarking and fine-tuning ASR models using the dataset, we demonstrate its technical value in capturing stuttered speech at an unprecedented scale and diversity – enabling better understanding and mitigation of fluency bias in ASR – and its social value in promoting self-advocacy and structural change for PWS in China. By foregrounding the lived experiences of PWS in their own voices, we also see the potential of this dataset to normalize speech disfluencies and cultivate deeper empathy for stuttering within the AI research community.
    Free, publicly-accessible full text available June 23, 2026
  2. Trained and optimized for typical and fluent speech, speech AI works poorly for people with speech diversities, often interrupting them and misinterpreting their speech. The increasing deployment of speech AI in automated phone menus, AI-conducted job interviews, and everyday devices poses tangible risks to people with speech diversities. To mitigate these risks, this workshop aims to build a multidisciplinary coalition and set the research agenda for fair and accessible speech AI. Bringing together a broad group of academics and practitioners with diverse perspectives, including HCI, AI, and other relevant fields such as disability studies, speech-language pathology, and law, this workshop will establish a shared understanding of the technical challenges for fair and accessible speech AI, as well as its ramifications in design, user experience, policy, and society. In addition, the workshop will invite and highlight first-person accounts from people with speech diversities, facilitating direct dialogue and collaboration between speech AI developers and the impacted communities. The key outcomes of this workshop include a summary paper that synthesizes our learnings and outlines the roadmap for improving speech AI for people with speech diversities, as well as a community of scholars, practitioners, activists, and policymakers interested in driving progress in this domain.
    Free, publicly-accessible full text available April 25, 2026
  3. Best Paper Award for Outstanding Study Design